💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion of unlearning in deep generative models. We will explore recent breakthroughs and challenges, focusing on how cutting-edge generative models handle unlearning tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: The Illusion of Unlearning: The Unstable Nature of Machine Unlearning in Text-to-Image Diffusion Models


🔸 Presenter: Aryan Komaei

🌀 Abstract:
This paper tackles a critical issue in text-to-image diffusion models such as Stable Diffusion, DALL·E, and Midjourney. These models are trained on massive datasets that often contain private or copyrighted content, raising serious legal and ethical concerns. To address this, machine unlearning methods have emerged that aim to remove specific information from a trained model. However, this paper reveals a major flaw: supposedly unlearned concepts can resurface when the model is later fine-tuned. The authors introduce a framework to analyze and evaluate the stability of current unlearning techniques and offer insights into why they often fail, paving the way for more robust future methods.
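
The instability described above can be illustrated with a toy numerical sketch (an illustrative assumption for intuition, not the paper's actual method): treat a model's knowledge as a weight vector and a concept as a fixed direction in weight space. "Unlearning" projects the concept component out of the weights, but fine-tuning toward an optimum that still encodes the concept quietly restores it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative model: its "knowledge" is a weight vector,
# and a "concept" is a fixed direction in weight space. (An assumption for
# illustration only; real diffusion models are far higher-dimensional.)
dim = 8
concept = np.zeros(dim)
concept[0] = 1.0                     # unit concept direction
w_star = rng.normal(size=dim)        # pretraining optimum
w_star[0] = 1.0                      # ensure the optimum encodes the concept

def concept_score(w):
    """How strongly the weights encode the concept (|projection|)."""
    return abs(w @ concept)

# "Unlearning": project the concept component out of the weights.
w = w_star.copy()
w_unlearned = w - (w @ concept) * concept

# "Fine-tuning" on seemingly benign data: gradient descent toward the
# pretraining optimum w_star, which still encodes the concept.
w_ft = w_unlearned.copy()
for _ in range(200):
    grad = w_ft - w_star             # gradient of 0.5 * ||w - w_star||^2
    w_ft -= 0.05 * grad

print("after unlearning:", concept_score(w_unlearned))
print("after fine-tuning:", concept_score(w_ft))
```

The concept score drops to zero after unlearning, yet after fine-tuning it climbs back toward its pretrained value: the concept was suppressed, not removed, which is the "illusion" the paper's title refers to.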

Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 11:00 AM - 12:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️



tg-me.com/RIMLLab/213
BY RIML Lab

